Results 1 - 18 of 18
1.
J Clin Epidemiol ; 105: 136-141, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30223065

ABSTRACT

BACKGROUND AND OBJECTIVE: Diagnostic and prognostic prediction models often perform poorly when externally validated. We investigate how differences in the measurement of predictors across settings affect the discriminative power and transportability of a prediction model. METHODS: Differences in predictor measurement between data sets can be described formally using a measurement error taxonomy. Using this taxonomy, we derive an expression relating variation in the measurement of a continuous predictor to the area under the receiver operating characteristic curve (AUC) of a logistic regression prediction model. This expression is used to demonstrate how variation in measurements across settings affects the out-of-sample discriminative ability of a prediction model. We illustrate these findings with a diagnostic prediction model using example data of patients suspected of having deep venous thrombosis. RESULTS: When a predictor, such as D-dimer, is measured with more noise in one setting compared to another, which we conceptualize as a difference in "classical" measurement error, the expected value of the AUC decreases. In contrast, constant, "structural" measurement error does not affect the AUC of a logistic regression model, provided the magnitude of the error is the same among cases and noncases. As the differences in measurement methods between settings (and in turn differences in measurement error structures) become more complex, it becomes increasingly difficult to predict how the AUC will differ between settings. CONCLUSION: When a prediction model is applied in a different setting from the one in which it was developed, its discriminative ability can decrease or even increase if the magnitude or structure of the errors in predictor measurements differs between the two settings. This provides an important starting point for researchers to better understand how differences in measurement methods can affect the performance of a prediction model when externally validating or implementing it in practice.


Subject(s)
Models, Statistical , Prognosis , ROC Curve , Analysis of Variance , Bias , Humans , Reproducibility of Results , Risk Assessment/methods
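The two headline findings of this abstract — extra "classical" noise in a predictor lowers the expected AUC, while a constant "structural" shift leaves it unchanged — can be reproduced with a short simulation. This is a sketch only: the effect size, noise level, and sample size below are arbitrary illustrative choices, not values from the study.

```python
import random

random.seed(42)

def auc(cases, controls):
    """Mann-Whitney AUC: P(a randomly chosen case scores higher than a control)."""
    scored = [(s, 1) for s in cases] + [(s, 0) for s in controls]
    scored.sort()
    case_rank_sum = sum(r + 1 for r, (_, is_case) in enumerate(scored) if is_case)
    n1, n0 = len(cases), len(controls)
    return (case_rank_sum - n1 * (n1 + 1) / 2) / (n1 * n0)

n = 2000
# Development setting: predictor separates groups by 1.5 SD (arbitrary choice).
cases = [random.gauss(1.5, 1.0) for _ in range(n)]
controls = [random.gauss(0.0, 1.0) for _ in range(n)]

# "Classical" error: extra zero-mean measurement noise in the new setting.
noisy_cases = [x + random.gauss(0.0, 1.0) for x in cases]
noisy_controls = [x + random.gauss(0.0, 1.0) for x in controls]

# "Structural" error: the same constant offset for cases and noncases.
shifted_cases = [x + 0.7 for x in cases]
shifted_controls = [x + 0.7 for x in controls]

auc_clean = auc(cases, controls)
auc_noisy = auc(noisy_cases, noisy_controls)      # lower than auc_clean
auc_shift = auc(shifted_cases, shifted_controls)  # identical to auc_clean
```

A constant offset preserves the ranking of every score, so `auc_shift` equals `auc_clean` exactly, while the added noise scrambles rankings and pulls the AUC toward 0.5.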
2.
Br J Anaesth ; 122(1): 60-68, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30579407

ABSTRACT

BACKGROUND: Delirium is frequently unrecognised. EEG shows slower frequencies (i.e. below 4 Hz) during delirium, which might be useful in improving delirium recognition. We studied the discriminative performance of a brief single-channel EEG recording for delirium detection in an independent cohort of patients. METHODS: In this prospective, multicentre study, postoperative patients aged ≥60 yr were included (n=159). Before operation and during the first 3 postoperative days, patients underwent a 5-min EEG recording, followed by a video-recorded standardised cognitive assessment. Two or, in case of disagreement, three delirium experts classified each postoperative day based on the video and chart review. Relative delta power (1-4 Hz) was based on 1-min artifact-free EEG. The diagnostic value of the relative delta power was evaluated by the area under the receiver operating characteristic curve (AUROC), using the expert classification as the gold standard. RESULTS: Experts classified 84 (23.3%) postoperative days as either delirium or possible delirium, and 276 (76.7%) non-delirium days. The AUROC of the relative EEG delta power was 0.75 [95% confidence interval (CI) 0.69-0.82]. Exploratory analysis showed that relative power from 1 to 6 Hz had significantly higher AUROC (0.78, 95% CI 0.72-0.84, P=0.014). CONCLUSIONS: Delirium/possible delirium can be detected in older postoperative patients based on a single-channel EEG recording that can be automatically analysed. This objective detection method with a continuous scale instead of a dichotomised outcome is a promising approach for routine detection of delirium. CLINICAL TRIAL REGISTRATION: NCT02404181.


Subject(s)
Delirium/diagnosis , Postoperative Complications/diagnosis , Aged , Aged, 80 and over , Algorithms , Electroencephalography/methods , Female , Humans , Male , Middle Aged , Monitoring, Physiologic/methods , Postoperative Care/methods , ROC Curve , Reproducibility of Results , Signal Processing, Computer-Assisted
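The EEG feature used here, relative delta power, is the share of spectral power falling in the 1-4 Hz band. The sketch below illustrates the computation with a naive DFT on a synthetic two-tone "signal"; the sampling rate, band edges, and amplitudes are invented, and a real pipeline would add artifact rejection and a windowed estimator such as Welch's method.

```python
import math

def dft_power(x):
    """Naive DFT power spectrum for real input, bins 0..len(x)//2."""
    n = len(x)
    power = []
    for k in range(n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append(re * re + im * im)
    return power

fs, seconds = 128, 4
n = fs * seconds                     # 512 samples -> 0.25 Hz frequency bins
# Toy "EEG": a strong 2 Hz (delta) plus a weaker 10 Hz (alpha) component.
x = [2.0 * math.sin(2 * math.pi * 2.0 * i / fs)
     + 1.0 * math.sin(2 * math.pi * 10.0 * i / fs) for i in range(n)]

power = dft_power(x)
bin_hz = fs / n

def band_power(lo_hz, hi_hz):
    return sum(p for k, p in enumerate(power) if lo_hz <= k * bin_hz <= hi_hz)

rel_delta = band_power(1, 4) / band_power(1, 30)  # delta share of 1-30 Hz power
```

With component amplitudes 2 and 1, band powers scale as the squared amplitudes, so `rel_delta` lands near 4/(4+1) = 0.8.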
3.
Br J Anaesth ; 120(5): 1080-1089, 2018 May.
Article in English | MEDLINE | ID: mdl-29661385

ABSTRACT

BACKGROUND: Associations between intraoperative hypotension (IOH) and postoperative complications have been reported. We examined whether using different methods to model IOH affected the association with postoperative myocardial injury (POMI) and acute kidney injury (AKI). METHODS: This two-centre cohort study included 10 432 patients aged ≥50 yr undergoing non-cardiac surgery. Twelve different methods to statistically model IOH [representing presence, depth, duration, and area under the threshold (AUT)] were applied to examine the association with POMI and AKI using logistic regression analysis. To define IOH, eight predefined thresholds were chosen. RESULTS: The incidences of POMI and AKI were 14.9% and 14.8%, respectively. Different methods to model IOH yielded effect estimates differing in size and statistical significance. Methods with the highest odds were absolute maximum decrease in blood pressure (BP) and mean episode AUT, odds ratio (OR) 1.43 [99% confidence interval (CI): 1.15-1.77] and OR 1.69 (99% CI: 0.99-2.88), respectively, for the absolute mean arterial pressure 50 mm Hg threshold. After standardisation, the highest standardised ORs were obtained for depth-related methods, OR 1.12 (99% CI: 1.05-1.20) for absolute and relative maximum decrease in BP. No single method always yielded the highest effect estimate in every setting. However, methods with the highest effect estimates remained consistent across different BP types, thresholds, outcomes, and centres. CONCLUSIONS: In studies on IOH, both the threshold to define hypotension and the method chosen to model IOH affect the association of IOH with outcome. This makes different studies on IOH less comparable and hampers clinical application of reported results.


Subject(s)
Acute Kidney Injury/epidemiology , Hypotension/epidemiology , Intraoperative Complications/epidemiology , Myocardial Infarction/epidemiology , Postoperative Complications/epidemiology , Surgical Procedures, Operative , Aged , Cohort Studies , Comorbidity , Female , Humans , Male , Models, Theoretical , Netherlands/epidemiology , Retrospective Studies
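The four families of IOH metrics this study compares — presence, duration, depth, and area under the threshold (AUT) — can all be read off a mean arterial pressure (MAP) trace. A minimal sketch with invented minute-by-minute readings and the absolute 50 mm Hg threshold mentioned in the abstract:

```python
def ioh_metrics(map_series, threshold_mmhg, dt_min=1.0):
    """Summarise one MAP trace against an absolute hypotension threshold."""
    depths = [threshold_mmhg - m for m in map_series if m < threshold_mmhg]
    return {
        "present": bool(depths),                   # any hypotension at all?
        "duration_min": len(depths) * dt_min,      # time spent under threshold
        "max_depth_mmhg": max(depths, default=0),  # deepest excursion
        "aut_mmhg_min": sum(depths) * dt_min,      # area under the threshold
    }

# One invented MAP reading per minute (mm Hg)
maps = [75, 68, 49, 45, 52, 61, 80]
m = ioh_metrics(maps, threshold_mmhg=50)
```

Here two readings fall below 50 mm Hg, so duration is 2 min, maximum depth 5 mm Hg, and AUT is 6 mm Hg·min — one trace feeding several different exposure definitions.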
4.
Ned Tijdschr Geneeskd ; 161: D1885, 2017.
Article in Dutch | MEDLINE | ID: mdl-29192572

ABSTRACT

OBJECTIVE: To determine the degree of agreement between delirium experts on the diagnosis of delirium based on exactly the same information, and to assess the sensitivity of delirium screening methods used by clinical nurses. DESIGN: Prospective observational longitudinal study. METHOD: Older patients (≥ 60 years) who underwent major surgery were included. During the first three days after surgery they completed a standardised cognitive screening test, which was recorded on video. Two delirium experts independently evaluated these videos and the information from the patient records. They classified the patients as having 'no delirium', 'possible delirium' or 'delirium'. If there was disagreement, a third expert was consulted. The final classification, based on consensus of two or three delirium experts, was compared with the result of the delirium screening carried out by the clinical nurses. RESULTS: A total of 167 patients were included and 424 postoperative classifications were obtained. The agreement between the experts, based on Cohen's kappa, was 0.61 (95% confidence interval (CI): 0.53-0.68). In 89 (21.0%) of the postoperative classifications there was no agreement between the experts and a third expert was consulted. The nurses using the delirium screening tools recognised 32% of the cases that had been classified as delirium by the experts. CONCLUSION: There was considerable disagreement between the classifications of individual delirium experts, based on exactly the same information, indicating the difficulty of the diagnosis. Furthermore, the sensitivity of the delirium screening tools used by the clinical nurses was poor. Further research should focus on the development of objective methods for recognising delirium.
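The agreement of 0.61 reported above is Cohen's kappa: raw agreement corrected for the agreement two raters would reach by chance. A self-contained sketch with invented expert classifications ('N' = no delirium, 'P' = possible delirium, 'D' = delirium):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of the raters' marginal label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented classifications for ten postoperative assessments
expert_1 = ['N', 'N', 'D', 'P', 'N', 'D', 'N', 'P', 'N', 'N']
expert_2 = ['N', 'N', 'D', 'N', 'N', 'P', 'N', 'P', 'N', 'D']
kappa = cohens_kappa(expert_1, expert_2)  # 0.7 raw agreement -> ~0.46 kappa
```

Note how 70% raw agreement shrinks to a kappa well below 0.7 once chance agreement on the common 'no delirium' label is discounted.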

5.
Br J Anaesth ; 119(4): 637-644, 2017 Oct 01.
Article in English | MEDLINE | ID: mdl-29121297

ABSTRACT

BACKGROUND: The inflammatory response to surgery varies considerably between individual patients. Age might be a substantial factor in this variability. Our objective was to examine the association of patient age and other potential risk factors with the occurrence of postoperative systemic inflammatory response syndrome (SIRS) during the first 24 h after cardiac surgery. METHODS: This was a retrospective cohort study, using linked data from the Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) Database and the Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database. Data from patients who underwent coronary artery bypass grafting and/or valve surgery were used. The association between age and postoperative SIRS was analysed using Poisson regression, and corrected for other risk factors. Restricted cubic splines were used to determine relevant age categories. Results are expressed as risk ratios (RR) with 95% confidence intervals (CI). RESULTS: Data from 28 513 patients were used. In both univariable and multivariable models, increased patient age was strongly associated with reduced postoperative SIRS prevalence. Using 73-83 yr as the reference category, the RRs (95% CI) for the age categories were 1.38 (1.28-1.49) for ≤43 yr, 1.15 (1.09-1.20) for 44-63 yr, 1.05 (1.00-1.09) for 64-72 yr, and 1.03 (0.94-1.12) for >83 yr, respectively. The predictive value for postoperative SIRS of the final model, however, was moderate (c-statistic: 0.61). CONCLUSIONS: We have demonstrated that advanced patient age is associated with a decreased risk of postoperative SIRS among cardiac surgery patients, with those aged over 72 yr having the lowest risk.


Subject(s)
Cardiac Surgical Procedures , Systemic Inflammatory Response Syndrome/epidemiology , Adult , Age Factors , Aged , Aged, 80 and over , Australia/epidemiology , Cohort Studies , Databases, Factual , Female , Humans , Male , Middle Aged , New Zealand/epidemiology , Perioperative Period , Postoperative Complications , Prevalence , Retrospective Studies , Risk Factors
6.
Neth Heart J ; 25(3): 200-206, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27882524

ABSTRACT

AIMS: Acute aortic dissection (AD) requires immediate treatment, but is a diagnostic challenge. We studied how often AD was missed initially, which patients were more likely to be missed and how this influenced patient management and outcomes. METHODS: A retrospective cohort study including 200 consecutive patients with AD as the final diagnosis, admitted to a tertiary hospital between 1998 and 2008. The first differential diagnosis was identified, and patients in whom AD was included in this differential diagnosis were compared with those in whom it was not. Characteristics associated with a lower level of suspicion were identified using multivariable logistic regression, and Cox regression was used for survival analyses. Missing data were imputed. RESULTS: Mean age was 63 years, 39% were female and 76% had Stanford type A dissection. In 69% of patients, AD was included in the first differential diagnosis; this was less likely in women (adjusted relative risk [aRR]: 0.66, 95% CI: 0.44-0.99), in the absence of back pain (aRR: 0.51, 95% CI: 0.30-0.84), and in patients with extracardiac atherosclerosis (aRR: 0.64, 95% CI: 0.43-0.96). Absence of AD in the differential diagnosis was associated with the use of more imaging tests (1.8 vs. 2.3, p = 0.01) and increased time from admission to surgery (1.8 vs. 10.1 h, p < 0.01), but not with a difference in the adjusted long-term all-cause mortality (hazard ratio: 0.76, 95% CI: 0.46-1.27). CONCLUSION: Acute aortic dissection was initially not suspected in almost one-third of patients; this was more likely in women, in the absence of back pain and in patients with extracardiac atherosclerosis. Although the number of imaging tests was higher and the time to surgery longer, patient outcomes were similar in both groups.

7.
Eur J Vasc Endovasc Surg ; 51(4): 473-80, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26553374

ABSTRACT

OBJECTIVE: Myocardial infarction (MI) is a frequent complication of carotid endarterectomy (CEA), yet most events are silent. Routine post-operative monitoring of cardiac troponin was implemented to facilitate timely recognition of MI and stratify high risk patients. The aim was to evaluate the incidence of troponin elevation after CEA and its association with adverse cardiovascular events. METHODS: This analysis included patients ≥60 years old who underwent CEA, whose troponin-I levels were routinely monitored post-operatively and were included in a cohort study that assessed clinical outcomes. A clinical troponin cutoff of 60 ng/L was used. The primary endpoint was the composite of MI, stroke, and cardiovascular death. Secondary endpoints were MI, stroke, coronary intervention, cardiovascular death, and all cause death. RESULTS: 225 consecutive patients were included in the analysis. Troponin elevation occurred in 34 patients (15%) and a post-operative MI was diagnosed in eight patients. After a median follow up of 1.8 years (IQR 1.0-2.6), the primary endpoint occurred in 29% of patients with troponin elevation versus 6.3% without (HR 5.6, 95% CI 2.4-13), MI in 24% versus 1.6% (HR 18.0, 95% CI 4.7-68), stroke in 5.9% versus 4.2% (HR 1.4, 95% CI 0.3-6.7), coronary intervention in 5.9% versus 2.6% (HR 2.7, 95% CI 0.5-14), cardiovascular death in 5.9% versus 0.5% (HR 11.8, 95% CI 1.1-131), and all cause death in 15% versus 5.8% (HR 3.0, 95% CI 1.0-8.7), respectively. Incidences of the primary endpoint and all cause mortality in patients with a post-operative MI versus "troponin only" were 25% versus 7.7% and 25% versus 12%, respectively. CONCLUSION: Troponin elevation after CEA occurred in 15% of patients. The incidence of adverse cardiovascular events was significantly higher in patients with troponin elevation, which was mainly attributable to silent non-ST segment elevation MIs that occurred in the early post-operative phase.


Subject(s)
Carotid Artery Diseases/surgery , Endarterectomy, Carotid/adverse effects , Myocardial Infarction/blood , Troponin I/blood , Aged , Aged, 80 and over , Biomarkers/blood , Carotid Artery Diseases/diagnosis , Carotid Artery Diseases/mortality , Databases, Factual , Endarterectomy, Carotid/mortality , Female , Humans , Incidence , Kaplan-Meier Estimate , Longitudinal Studies , Male , Middle Aged , Myocardial Infarction/diagnosis , Myocardial Infarction/etiology , Myocardial Infarction/mortality , Myocardial Infarction/therapy , Risk Factors , Stroke/etiology , Time Factors , Treatment Outcome , Up-Regulation
8.
J Clin Monit Comput ; 30(6): 797-805, 2016 Dec.
Article in English | MEDLINE | ID: mdl-26424541

ABSTRACT

Altered respiratory rate is one of the first symptoms of medical conditions that require timely intervention, e.g., sepsis or opioid-induced respiratory depression. To facilitate continuous respiratory rate monitoring on general hospital wards, a contactless, non-invasive prototype monitor was developed using frequency-modulated continuous wave radar. We aimed to study whether radar can reliably measure respiratory rate in postoperative patients. In a diagnostic cross-sectional study, patients were monitored with the radar and the reference monitor (pneumotachograph during mechanical ventilation and capnography during spontaneous breathing). Eight patients were included, yielding 796 min of observation time during mechanical ventilation and 521 min during spontaneous breathing. After elimination of movement artifacts, the bias and 95% limits of agreement for mechanical ventilation and spontaneous breathing were -0.12 (-1.76 to 1.51) and -0.59 (-5.82 to 4.63) breaths per minute, respectively. The radar was able to accurately measure respiratory rate in mechanically ventilated patients, but the accuracy decreased during spontaneous breathing.


Subject(s)
Monitoring, Physiologic/methods , Radar , Respiration, Artificial/methods , Respiratory Rate , Adult , Algorithms , Artifacts , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Movement , Postoperative Period , Reproducibility of Results , Respiration , Respiratory Insufficiency , Signal Processing, Computer-Assisted , Wireless Technology
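The bias and "95% limits of agreement" quoted for the radar are Bland-Altman statistics on the paired differences between the prototype and the reference monitor. A sketch with invented paired respiratory rates:

```python
import statistics

def bland_altman(device, reference):
    """Bias and 95% limits of agreement between two measurement methods."""
    diffs = [d - r for d, r in zip(device, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented paired respiratory rates (breaths per minute)
radar = [12.1, 14.8, 16.2, 11.5, 18.0, 13.3]
reference = [12.0, 15.0, 16.0, 12.0, 17.8, 13.5]
bias, lower, upper = bland_altman(radar, reference)
```

By convention the limits are bias ± 1.96 × SD of the differences, the interval expected to contain roughly 95% of device-reference disagreements.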
9.
Eur J Pain ; 19(7): 929-39, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25413847

ABSTRACT

BACKGROUND: A large cohort study recently reported high pain scores after caesarean section (CS). The aim of this study was to analyse how pain after CS interferes with patients' activities and to identify possible causes of insufficient pain treatment. METHODS: We analysed pain scores, pain-related interferences (with movement, deep breathing, mood and sleep), analgesic techniques, analgesic consumption, adverse effects and the wish to have received more analgesics during the first 24 h after surgery. To better evaluate the severity of impairment by pain, the results of CS patients were compared with those of patients undergoing hysterectomy. RESULTS: CS patients (n = 811) were compared with patients undergoing abdominal, laparoscopic-assisted vaginal or vaginal hysterectomy (n = 2406, from 54 hospitals). Pain intensity, wish for more analgesics and most interference outcomes were significantly worse after CS compared with hysterectomies. CS patients with spinal or general anaesthesia and without patient-controlled analgesia (PCA) received significantly less opioids on the ward (62% without any opioid) compared with patients with PCA (p < 0.001). Patients with PCA reported pain-related interference with movement and deep breathing between 49% and 52% compared with patients without PCA (between 68% and 73%; p-values between 0.004 and 0.013; not statistically significant after correction for multiple testing). CONCLUSION: In daily clinical practice, pain after CS is much higher than previously thought. Pain management was insufficient compared with patients undergoing hysterectomy. Unfavourable outcome was mainly associated with low opioid administration after CS. Contradictory pain treatment guidelines for patients undergoing CS and for breastfeeding mothers might contribute to reluctance of opioid administration in CS patients.


Subject(s)
Cesarean Section , Pain, Postoperative/therapy , Adult , Analgesia, Patient-Controlled , Analgesics/adverse effects , Analgesics/therapeutic use , Anesthesia, Obstetrical , Cohort Studies , Female , Humans , Hysterectomy , Pain Management , Pain Measurement , Pregnancy , Sleep , Surveys and Questionnaires , Treatment Outcome
10.
Heart ; 101(3): 222-9, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25256148

ABSTRACT

OBJECTIVE: Various cardiovascular prediction models have been developed for patients with type 2 diabetes. Their predictive performance in new patients is mostly not investigated. This study aims to quantify the predictive performance of all cardiovascular prediction models developed specifically for diabetes patients. DESIGN AND METHODS: Follow-up data of 453, 1174 and 584 type 2 diabetes patients without pre-existing cardiovascular disease (CVD) in the EPIC-NL, EPIC-Potsdam and Secondary Manifestations of ARTerial disease cohorts, respectively, were used to validate 10 prediction models to estimate risk of CVD or coronary heart disease (CHD). Discrimination was assessed by the c-statistic for time-to-event data. Calibration was assessed by calibration plots, the Hosmer-Lemeshow goodness-of-fit statistic and expected to observed ratios. RESULTS: There was a large variation in performance of CVD and CHD scores between different cohorts. Discrimination was moderate for all 10 prediction models, with c-statistics ranging from 0.54 (95% CI 0.46 to 0.63) to 0.76 (95% CI 0.67 to 0.84). Calibration of the original models was poor. After simple recalibration to the disease incidence of the target populations, predicted and observed risks were close. Expected to observed ratios of the recalibrated models ranged from 1.06 (95% CI 0.81 to 1.40) to 1.55 (95% CI 0.95 to 2.54), mainly driven by an overestimation of risk in high-risk patients. CONCLUSIONS: All 10 evaluated models had a comparable and moderate discriminative ability. The recalibrated, but not the original, prediction models provided accurate risk estimates. These models can assist clinicians in identifying type 2 diabetes patients who are at low or high risk of developing CVD.


Subject(s)
Cardiovascular Diseases/etiology , Diabetes Mellitus, Type 2/complications , Models, Cardiovascular , Risk Assessment , Cardiovascular Diseases/epidemiology , Global Health , Humans , Risk Factors
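"Simple recalibration to the disease incidence of the target population" means updating only the model intercept so the mean predicted risk matches the observed event rate (recalibration-in-the-large); rankings, and hence discrimination, are untouched. A sketch with made-up predicted risks; the bisection search is just one convenient way to solve for the intercept shift.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(z):
    return 1 / (1 + math.exp(-z))

def recalibrate_in_the_large(pred_risks, target_rate):
    """Shift the logit intercept so the mean predicted risk equals target_rate."""
    def mean_risk(shift):
        return sum(inv_logit(logit(p) + shift) for p in pred_risks) / len(pred_risks)
    lo, hi = -10.0, 10.0
    for _ in range(60):              # bisection: mean_risk is monotone in shift
        mid = (lo + hi) / 2
        if mean_risk(mid) < target_rate:
            lo = mid
        else:
            hi = mid
    shift = (lo + hi) / 2
    return [inv_logit(logit(p) + shift) for p in pred_risks]

# Invented risks from a model that overestimates: mean 0.20 vs observed 0.10
risks = [0.05, 0.10, 0.20, 0.30, 0.40, 0.15]
recalibrated = recalibrate_in_the_large(risks, target_rate=0.10)
```

The recalibrated risks average exactly to the target incidence while preserving the original ordering of patients.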
11.
Diabetes Obes Metab ; 16(5): 426-32, 2014 May.
Article in English | MEDLINE | ID: mdl-24251579

ABSTRACT

AIMS: The aim of this study was to assess associations between patient characteristics, intensification of blood glucose-lowering treatment through oral glucose-lowering therapy and/or insulin and effective glycaemic control in type 2 diabetes. METHODS: 11 140 patients from the Action in Diabetes and Vascular disease: preterAx and diamicroN-MR Controlled Evaluation (ADVANCE) trial who were randomized to intensive glucose control or standard glucose control and followed up for a median of 5 years were categorized into two groups: effective glycaemic control [haemoglobin A1c (HbA1c) ≤ 7.0% or a proportionate reduction in HbA1c over 10%] or ineffective glycaemic control (HbA1c > 7.0% and a proportionate reduction in HbA1c less than or equal to 10%). Therapeutic intensification was defined as addition of an oral glucose-lowering agent or commencement of insulin. Pooled logistic regression models examined the associations between patient factors, intensification and effective glycaemic control. RESULTS: A total of 7768 patients (69.7%), including 3198 in the standard treatment group achieved effective glycaemic control. Compared to patients with ineffective control, patients with effective glycaemic control had shorter duration of diabetes and lower HbA1c at baseline and at the time of treatment intensification. Treatment intensification with addition of an oral agent or commencement of insulin was associated with a 107% [odds ratio, OR: 2.07 (95% confidence interval, CI: 1.95-2.20)] and 152% [OR: 2.52 (95% CI: 2.30-2.77)] greater chance of achieving effective glycaemic control, respectively. These associations were robust after adjustment for several baseline characteristics and not modified by the number of oral medications taken at the time of treatment intensification. CONCLUSIONS: Effective glycaemic control was associated with treatment intensification at lower HbA1c levels at all stages of the disease course and in both arms of the ADVANCE trial.


Subject(s)
Blood Glucose/drug effects , Diabetes Mellitus, Type 2/drug therapy , Glycated Hemoglobin/drug effects , Hypoglycemic Agents/administration & dosage , Administration, Oral , Blood Glucose/metabolism , Diabetes Mellitus, Type 2/blood , Drug Administration Schedule , Female , Follow-Up Studies , Glycated Hemoglobin/metabolism , Humans , Logistic Models , Male , Middle Aged , Quality of Life , Treatment Outcome
12.
BMJ ; 347: f5913, 2013 Oct 21.
Article in English | MEDLINE | ID: mdl-24144869

ABSTRACT

OBJECTIVES: To assess the consequences of applying different mortality timeframes on standardised mortality ratios of individual hospitals and, secondarily, to evaluate the association between in-hospital standardised mortality ratios and early post-discharge mortality rate, length of hospital stay, and transfer rate. DESIGN: Retrospective analysis of routinely collected hospital data to compare observed deaths in 50 diagnostic categories with deaths predicted by a case mix adjustment method. SETTING: 60 Dutch hospitals. PARTICIPANTS: 1 228 815 patients discharged in the period 2008 to 2010. MAIN OUTCOME MEASURES: In-hospital standardised mortality ratio, 30 days post-admission standardised mortality ratio, and 30 days post-discharge standardised mortality ratio. RESULTS: Compared with the in-hospital standardised mortality ratio, 33% of the hospitals were categorised differently with the 30 days post-admission standardised mortality ratio and 22% were categorised differently with the 30 days post-discharge standardised mortality ratio. A positive association was found between in-hospital standardised mortality ratio and length of hospital stay (Pearson correlation coefficient 0.33; P=0.01), and an inverse association was found between in-hospital standardised mortality ratio and early post-discharge mortality (Pearson correlation coefficient -0.37; P=0.004). CONCLUSIONS: Applying different mortality timeframes resulted in differences in standardised mortality ratios and differences in judgment regarding the performance of individual hospitals. Furthermore, associations between in-hospital standardised mortality rates, length of stay, and early post-discharge mortality rates were found. Combining these findings suggests that standardised mortality ratios based on in-hospital mortality are subject to so-called "discharge bias." Hence, early post-discharge mortality should be included in the calculation of standardised mortality ratios.


Subject(s)
Benchmarking/methods , Hospital Mortality , Hospitals/standards , Patient Discharge , Databases, Factual , Female , Humans , Length of Stay/statistics & numerical data , Male , Middle Aged , Netherlands , Patient Transfer/statistics & numerical data , Registries , Retrospective Studies , Risk Adjustment , Time Factors
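A standardised mortality ratio is observed deaths divided by the expected deaths from the case-mix adjustment model, conventionally multiplied by 100. The "discharge bias" described above shows up directly with toy numbers: a hospital that discharges patients early can look poor on the in-hospital SMR yet near average once early post-discharge deaths are counted. All counts below are invented.

```python
def smr(observed_deaths, expected_deaths):
    """Standardised mortality ratio: observed / case-mix-expected deaths, x100."""
    return 100.0 * observed_deaths / expected_deaths

# Invented counts for a hospital that discharges patients early: deaths that
# occur shortly after discharge are invisible to the in-hospital timeframe.
smr_in_hospital = smr(90, 75)           # looks well above average
smr_30d_post_discharge = smr(130, 140)  # near or below average
```

The same hospital is judged very differently under the two timeframes, which is exactly the 33% and 22% reclassification the study reports.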
13.
Ann Surg ; 255(1): 44-9, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22123159

ABSTRACT

OBJECTIVE: To evaluate the effect of implementation of the WHO's Surgical Safety Checklist on mortality and to determine to what extent the potential effect was related to checklist compliance. BACKGROUND: Marked reductions in postoperative complications after implementation of a surgical checklist have been reported. As compliance to the checklists was reported to be incomplete, it remains unclear whether the benefits obtained were through actual completion of a checklist or from an increase in overall awareness of patient safety issues. METHODS: This retrospective cohort study included 25,513 adult patients undergoing non-day case surgery in a tertiary university hospital. Hospital administrative data and electronic patient records were used to obtain data. In-hospital mortality within 30 days after surgery was the main outcome and effect estimates were adjusted for patient characteristics, surgical specialty and comorbidity. RESULTS: After checklist implementation, crude mortality decreased from 3.13% to 2.85% (P = 0.19). After adjustment for baseline differences, mortality was significantly decreased after checklist implementation (odds ratio [OR] 0.85; 95% CI, 0.73-0.98). This effect was strongly related to checklist compliance: the OR for the association between full checklist completion and outcome was 0.44 (95% CI, 0.28-0.70), compared to 1.09 (95% CI, 0.78-1.52) and 1.16 (95% CI, 0.86-1.56) for partial or noncompliance, respectively. CONCLUSIONS: Implementation of the WHO Surgical Checklist reduced in-hospital 30-day mortality. Although the impact on outcome was smaller than previously reported, the effect depended crucially upon checklist compliance.


Subject(s)
Checklist/standards , Hospital Mortality/trends , Patient Safety/standards , World Health Organization , Adult , Aged , Checklist/statistics & numerical data , Cohort Studies , Female , Guideline Adherence/statistics & numerical data , Guideline Adherence/trends , Health Plan Implementation/organization & administration , Hospitals, University , Humans , Male , Middle Aged , Netherlands , Odds Ratio , Outcome Assessment, Health Care/statistics & numerical data , Retrospective Studies , Survival Rate , Utilization Review
14.
Heart ; 98(5): 360-9, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22184101

ABSTRACT

CONTEXT: A recent overview of all cardiovascular disease (CVD) models applicable to diabetes patients is not available. OBJECTIVE: To review the primary prevention studies that focused on the development, validation and impact assessment of cardiovascular risk models, scores or rules that can be applied to patients with type 2 diabetes. DESIGN: Systematic review. DATA SOURCES: Medline was searched from 1966 to 1 April 2011. STUDY SELECTION: A study was eligible when it described the development, validation or impact assessment of a model that was constructed to predict the occurrence of cardiovascular disease in people with type 2 diabetes, or when the model was designed for use in the general population but included diabetes as a predictor. DATA EXTRACTION: A standardized form was used to extract all data on the CVD models. RESULTS: 45 prediction models were identified, of which 12 were specifically developed for patients with type 2 diabetes. Only 31% of the risk scores have been externally validated in a diabetes population, with an area under the curve ranging from 0.61 to 0.86 and from 0.59 to 0.80 for models developed in a diabetes population and in the general population, respectively. Only one risk score has been studied for its effect on patient management and outcomes. 10% of the risk scores are advocated in national diabetes guidelines. CONCLUSION: Many cardiovascular risk scores are available that can be applied to patients with type 2 diabetes. A minority of these risk scores have been validated and tested for their predictive accuracy, with only a few showing a discriminative value of ≥0.80. The impact of applying these risk scores in clinical practice is almost completely unknown, but their use is recommended in various national guidelines.


Subject(s)
Cardiovascular Diseases , Models, Cardiovascular , Primary Prevention , Risk Assessment/methods , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/etiology , Cardiovascular Diseases/prevention & control , Diabetes Mellitus, Type 2/complications , Diabetes Mellitus, Type 2/epidemiology , Global Health , Humans , Incidence , Prognosis , Risk Factors
15.
Early Hum Dev ; 87(3): 183-91, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21220192

ABSTRACT

BACKGROUND: Extremely low birth weight (ELBW) infants are at risk of cognitive impairment and follow-up is therefore of major importance. The age at which their neurodevelopmental outcome (NDO) can reliably be predicted differs in the literature. AIMS: To describe NDO at 2, 3.5 and 5.5 years in an ELBW cohort. To examine the value of NDO at 2 years corrected age (CA) for prediction of NDO at 3.5 and 5.5 years. STUDY DESIGN: A retrospective cross-sectional and longitudinal cohort study. SUBJECTS: 101 children with a BW≤750 g, born between 1996 and 2005, who survived NICU admission and were included in a follow-up program. OUTCOME MEASURES: NDO, measured with different tests for general development and intelligence, depending on age of assessment, and classified as normal (Z-score≥-1), mildly delayed (-2≤Z-score<-1) or severely delayed (Z-score<-2). RESULTS: At 2, 3.5 and 5.5 years, 74.3%, 82.2% and 76.2% of the children had a normal NDO, respectively. A normal NDO at 2 years CA predicted a normal NDO at 3.5 and 5.5 years in 92% and 84% of cases, respectively. Of the children with a mildly or severely delayed NDO at 2 years CA, the majority showed an improved NDO at 3.5 years (69.2%) and 5.5 years (65.4%). CONCLUSIONS: The majority of the children with a BW≤750 g had a normal NDO at all ages. A normal NDO at 2 years CA is a good predictor of normal outcome at 3.5 and 5.5 years, whereas a delayed NDO at 2 years CA is subject to change, with the majority of the children showing a better NDO at 3.5 and 5.5 years.


Subject(s)
Child Development/physiology , Cognition/physiology , Developmental Disabilities/physiopathology , Infant, Very Low Birth Weight/physiology , Chi-Square Distribution , Cohort Studies , Cross-Sectional Studies , Female , Follow-Up Studies , Humans , Infant, Newborn , Infant, Premature , Infant, Very Low Birth Weight/psychology , Longitudinal Studies , Predictive Value of Tests , Pregnancy
16.
Diabetologia ; 54(2): 264-70, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21076956

ABSTRACT

AIMS/HYPOTHESIS: Treatment guidelines recommend the UK Prospective Diabetes Study (UKPDS) risk engine for predicting cardiovascular risk in patients with type 2 diabetes, although validation studies showed moderate performance. The methods used in these validation studies were diverse, however, and sometimes insufficient. Hence, we assessed the discrimination and calibration of the UKPDS risk engine to predict 4, 5, 6 and 8 year cardiovascular risk in patients with type 2 diabetes. METHODS: The cohort included 1,622 patients with type 2 diabetes. During a mean follow-up of 8 years, patients were followed for incidence of coronary heart disease (CHD) and cardiovascular disease (CVD). Discrimination and calibration were assessed for 4, 5, 6 and 8 year risk. Discrimination was examined using the c-statistic, and calibration by visually inspecting calibration plots and calculating the Hosmer-Lemeshow χ(2) statistic. RESULTS: The UKPDS risk engine showed moderate to poor discrimination for both CHD and CVD (c-statistic of 0.66 for both 5 year CHD and CVD risks), and an overestimation of the risk (by 224% and 112%, respectively). The calibration of the UKPDS risk engine was slightly better for patients with type 2 diabetes who had been diagnosed with diabetes more than 10 years ago compared with patients diagnosed more recently, particularly for 4 and 5 year predicted CVD and CHD risks. Discrimination for these periods was still moderate to poor. CONCLUSIONS/INTERPRETATION: We observed that the UKPDS risk engine overestimates CHD and CVD risk. The discriminative ability of this model is moderate, irrespective of various subgroup analyses. To enhance the prediction of CVD in patients with type 2 diabetes, this model should be updated.
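The two validation measures named in the methods, the c-statistic for discrimination and the Hosmer-Lemeshow χ² for calibration, can both be computed directly from a vector of predicted risks and observed events. A minimal sketch under standard definitions (not the study's actual code; function names and the decile grouping are illustrative assumptions):

```python
from itertools import product

def c_statistic(risks, events):
    """Concordance (c) statistic: the proportion of all event/non-event
    pairs in which the patient with the event received the higher
    predicted risk (ties count one half). For a binary outcome this
    equals the area under the ROC curve."""
    cases = [r for r, e in zip(risks, events) if e == 1]
    noncases = [r for r, e in zip(risks, events) if e == 0]
    concordant = sum(1.0 if c > n else 0.5 if c == n else 0.0
                     for c, n in product(cases, noncases))
    return concordant / (len(cases) * len(noncases))

def hosmer_lemeshow(risks, events, groups=10):
    """Hosmer-Lemeshow chi-square: patients are sorted by predicted risk,
    split into groups (deciles by default), and observed vs expected
    event counts are compared within each group."""
    data = sorted(zip(risks, events))
    n = len(data)
    chi2 = 0.0
    for g in range(groups):
        chunk = data[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        expected = sum(r for r, _ in chunk)   # sum of predicted risks
        observed = sum(e for _, e in chunk)   # count of observed events
        mean_risk = expected / len(chunk)
        if 0 < mean_risk < 1:
            chi2 += (observed - expected) ** 2 / (expected * (1 - mean_risk))
    return chi2
```

A c-statistic of 0.66, as reported here, means that in roughly two thirds of case/non-case pairs the model ranked the case higher; systematic overestimation, by contrast, shows up in the calibration statistic and plots, not in the c-statistic, which is insensitive to uniform inflation of the predicted risks.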


Subject(s)
Diabetes Mellitus, Type 2/complications , Adult , Aged , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/etiology , Coronary Disease/epidemiology , Coronary Disease/etiology , Female , Humans , Male , Middle Aged , Risk Factors , Young Adult
17.
Arch Dis Child Fetal Neonatal Ed ; 96(3): F169-77, 2011 May.
Article in English | MEDLINE | ID: mdl-20530098

ABSTRACT

OBJECTIVES: To describe 2-year neurodevelopmental outcome (NDO) in a cohort of extremely low birthweight infants, and compare NDO between two consecutive 5-year periods and between appropriate (AGA, ≥p10) and small for gestational age (SGA, <p10) infants. NDO was classified as normal (Z score ≥-1), mildly delayed (-2≤Z score<-1) or severely delayed (Z score <-2). RESULTS: 74.3% of the children had a normal NDO at 2 years corrected age, 20.8% a mildly and 5% a severely delayed outcome. Although survival significantly increased with time (65.8% to 88.1%, p=0.002), significantly fewer children in cohort II (66.1% vs 84.4% in cohort I, p=0.042) as well as fewer SGA children (64.3% vs 86.7% of AGA children, p=0.012) had a normal NDO. CONCLUSIONS: Increased survival of infants with a birth weight ≤750 g coincided with more children with an impaired NDO at 2 years corrected age. SGA infants are especially at risk of impaired NDO.


Subject(s)
Developmental Disabilities/etiology , Infant, Premature/psychology , Birth Weight , Epidemiologic Methods , Female , Gestational Age , Humans , Infant Care/methods , Infant, Newborn , Infant, Small for Gestational Age/psychology , Infant, Very Low Birth Weight/psychology , Intensive Care Units, Neonatal , Male , Prognosis , Psychometrics
18.
Br J Anaesth ; 105(5): 620-6, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20682570

ABSTRACT

BACKGROUND: During preanaesthesia evaluation at an outpatient clinic, information is summarized and structured. We aimed to estimate the effectiveness of this process by determining the proportion of patients presenting for surgery who had valid preoperative anaesthesia assessment records, and also the proportion of patients with a record that contained sufficient information. METHODS: This study included all non-cardiac surgery procedures performed in adults in 2006 and 2007 in a university hospital. In each case, we asked the anaesthesiologist actually providing anaesthesia to assess the quality of the preoperative record on an electronic feedback form showing a predefined drop-down menu and a free-text box. The primary outcome was the proportion of procedures with a valid record (<6 months old) that also contained sufficient and adequate information to safely provide anaesthesia. Secondly, all predefined remarks were assessed for relevance and the proportion of (relevant) remarks per individual anaesthesiologist was calculated. RESULTS: During the study period, 21 454 procedures were performed. A valid record was available in 20 700 procedures (97%). In 459 (2%) cases, a remark (mostly about undetected comorbidity) was given by the anaesthesia provider, of which 347 (76%) were assessed as 'relevant', resulting in 20 353 (95%) valid records containing sufficient and adequate information. Between individual anaesthesiologists, the percentage of remarks ranged from 0.4% to 12.7%. CONCLUSIONS: On entering the operating theatre, 95% of elective surgery patients had a preanaesthesia evaluation record that contained sufficient and adequate information to safely provide anaesthesia. There was large variability in reporting remarks.


Subject(s)
Medical Records/standards , Outpatient Clinics, Hospital/standards , Preoperative Care/standards , Quality of Health Care , Adolescent , Adult , Aged , Aged, 80 and over , Attitude of Health Personnel , Cohort Studies , Comorbidity , Feedback , Female , Hospitals, University/standards , Humans , Male , Middle Aged , Netherlands , Patient Care Planning/standards , Time Factors , Young Adult